Pauli Measurement


Sequence-Model-Guided Measurement Selection for Quantum State Learning

Huang, Jiaxin, Zhu, Yan, Chiribella, Giulio, Wu, Ya-Dong

arXiv.org Artificial Intelligence

Machine learning provides a powerful tool for characterizing quantum systems based on measurement data [1-40]. In particular, deep neural networks have played an important role across a range of tasks, including quantum state reconstruction [7-16], quantum similarity testing [17, 20, 37], prediction of quantum entanglement [21, 24, 40], and state classification [25-33]. Recent progress has enabled sequence models to predict diverse quantum properties of scalable quantum systems, by modeling the measurement outcome distributions [18, 19, 22, 23, 39, 41]. An important question in quantum state learning is how to choose the appropriate measurements to gather information about an unknown quantum state. While an optimized adaptive choice can be found for small quantum systems [42-44], a full optimization quickly becomes intractable as the size of the system grows large. For scalable quantum systems, a widespread approach is to employ randomized measurements [45-51]. This approach enables the estimation of a wide range of observables without performing a full tomography of the quantum state, which is not feasible for large quantum systems. When prior knowledge is available, the randomized measurement choices can be further optimized [52-54]. In general, however, determining the optimal distributions is computationally challenging for large-scale quantum systems, especially when an approximate classical description is lacking.
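The randomized-measurement strategy discussed above can be sketched in a few lines. The helper below (names and parameters are illustrative, not from the paper) draws uniformly random single-qubit Pauli measurement settings, and optionally biases the per-qubit basis distribution, as one might when prior knowledge about the state is available:

```python
import numpy as np

def sample_pauli_settings(num_qubits, num_settings, rng, weights=None):
    """Sample measurement settings as strings of single-qubit Pauli bases.

    With weights=None this is the uniform randomized-measurement scheme;
    a non-uniform `weights` over (X, Y, Z) biases the distribution, a simple
    stand-in for the optimized distributions mentioned in the abstract.
    """
    bases = np.array(["X", "Y", "Z"])
    p = None if weights is None else np.asarray(weights, float) / np.sum(weights)
    picks = rng.choice(bases, size=(num_settings, num_qubits), p=p)
    return ["".join(row) for row in picks]
```

For example, `sample_pauli_settings(4, 10, np.random.default_rng(0))` returns ten four-character strings such as `"XZYX"`, one measurement basis per qubit per setting.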


Classical Shadows with Improved Median-of-Means Estimation

Fu, Winston, Koh, Dax Enshan, Goh, Siong Thye, Kong, Jian Feng

arXiv.org Machine Learning

The classical shadows protocol, introduced by Huang et al. [Nat. Phys. 16, 1050 (2020)], makes use of the median-of-means (MoM) estimator to efficiently estimate the expectation values of $M$ observables with failure probability $\delta$ using only $\mathcal{O}(\log(M/\delta))$ measurements. In their analysis, Huang et al. used loose constants in their asymptotic performance bounds for simplicity. However, the specific values of these constants can significantly affect the number of shots used in practical implementations. To address this, we studied a modified MoM estimator proposed by Minsker [PMLR 195, 5925 (2023)] that uses optimal constants and involves a U-statistic over the data set. For efficient estimation, we implemented two types of incomplete U-statistics estimators, the first based on random sampling and the second based on cyclically permuted sampling. We compared the performance of the original and modified estimators when used with the classical shadows protocol with single-qubit Clifford unitaries (Pauli measurements) for an Ising spin chain, and global Clifford unitaries (Clifford measurements) for the Greenberger-Horne-Zeilinger (GHZ) state. While the original estimator outperformed the modified estimators for Pauli measurements, the modified estimators showed improved performance over the original estimator for Clifford measurements. Our findings highlight the importance of tailoring estimators to specific measurement settings to optimize the performance of the classical shadows protocol in practical applications.
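The median-of-means estimator at the heart of the protocol is simple to state; a minimal sketch (the function name and batching choice are mine, not the paper's):

```python
import numpy as np

def median_of_means(samples, num_batches):
    """Median-of-means: split the samples into num_batches groups,
    average each group, and return the median of the group means.
    The median step suppresses outlier batches, which is what yields
    the exponentially small failure probability."""
    batches = np.array_split(np.asarray(samples, dtype=float), num_batches)
    return float(np.median([batch.mean() for batch in batches]))
```

The modified estimator studied in the abstract replaces this single fixed partition with a U-statistic over many batch choices, approximated there by random sampling or cyclically permuted sampling of the incomplete U-statistic.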


Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels

You, Xuchen, Chakrabarti, Shouvanik, Chen, Boyang, Wu, Xiaodi

arXiv.org Artificial Intelligence

A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of the neural tangent kernels (NTKs) in probing the dynamics of classical neural networks, a recent line of works proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that, contrary to popular belief, they are qualitatively different from those of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent kernel regression derived at the random initialization. As a result of the deviation, we prove at-most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to fast QNN convergence.
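As a deliberately tiny illustration of a QNN output with a Pauli measurement, the sketch below builds a one-parameter, one-qubit circuit whose output is the bounded expectation value of Pauli Z (the bounded "range of measurements" the abstract refers to), with its exact gradient via the parameter-shift rule. This is a toy example of mine, not one of the circuits analyzed in the paper:

```python
import numpy as np

def qnn_output(theta):
    """Toy one-qubit QNN: prepare RY(theta)|0>, then measure the Pauli-Z
    observable. The output is an expectation value bounded in [-1, 1]."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ Z @ state)

def parameter_shift_grad(theta):
    """Exact gradient of the output via the parameter-shift rule,
    valid for rotations generated by Pauli operators."""
    return (qnn_output(theta + np.pi / 2) - qnn_output(theta - np.pi / 2)) / 2
```

Here `qnn_output(theta)` equals cos(theta), so the parameter-shift gradient reproduces -sin(theta) exactly.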


Provably efficient machine learning for quantum many-body problems

Huang, Hsin-Yuan, Kueng, Richard, Torlai, Giacomo, Albert, Victor V., Preskill, John

arXiv.org Artificial Intelligence

Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under widely accepted complexity theory assumptions, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
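In its simplest single-qubit form, the classical shadow construction underlying these results can be sketched directly: measure in a uniformly random Pauli basis, record the outcome b, and apply the inverse of the measurement channel via the snapshot 3 U†|b><b|U - I, whose average reproduces the state. The miniature simulator below is my own illustration, not the authors' code:

```python
import numpy as np

# Basis-change unitaries rotating each Pauli eigenbasis into the computational basis.
I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
BASIS_U = {"X": H, "Y": H @ Sdg, "Z": I2}

def shadow_estimate(rho, obs, num_shots, rng):
    """Estimate tr(obs @ rho) from single-qubit classical shadows."""
    total = 0.0
    for _ in range(num_shots):
        U = BASIS_U[rng.choice(["X", "Y", "Z"])]
        # Born-rule outcome probabilities in the rotated basis.
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
        b = rng.choice(2, p=probs / probs.sum())
        ket = np.zeros((2, 1), dtype=complex)
        ket[b, 0] = 1.0
        # Inverse of the depolarizing measurement channel.
        snapshot = 3 * (U.conj().T @ ket @ ket.conj().T @ U) - I2
        total += np.real(np.trace(obs @ snapshot))
    return total / num_shots
```

Averaged over shots, the snapshots form an unbiased estimator of the state, so linear observables can be estimated without full tomography; the many-qubit protocol tensors such snapshots together.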


Estimation of low rank density matrices by Pauli measurements

Xia, Dong

arXiv.org Machine Learning

Density matrices are positive semi-definite Hermitian matrices with unit trace that describe the states of quantum systems. Many quantum systems of physical interest can be represented as high-dimensional low rank density matrices. A popular problem in {\it quantum state tomography} (QST) is to estimate the unknown low rank density matrix of a quantum system by conducting Pauli measurements. Our main contribution is twofold. First, we establish the minimax lower bounds in Schatten $p$-norms with $1\leq p\leq +\infty$ for the estimation of low rank density matrices by Pauli measurements. In our previous paper, these minimax lower bounds are proved under the trace regression model with Gaussian noise and the noise is assumed to have common variance. In this paper, we prove these bounds under the Binomial observation model, which matches the actual measurement model in QST. Second, we study the Dantzig estimator (DE) for estimating the unknown low rank density matrix under the Binomial observation model by using Pauli measurements. In our previous papers, we studied the least squares estimator and the projection estimator, where we proved the optimal convergence rates for the least squares estimator in Schatten $p$-norms with $1\leq p\leq 2$ and, under a stronger condition, the optimal convergence rates for the projection estimator in Schatten $p$-norms with $1\leq p\leq +\infty$. In this paper, we show that the results of these two distinct estimators can be simultaneously obtained by the Dantzig estimator. Moreover, better convergence rates in Schatten norm distances can be proved for the Dantzig estimator under conditions weaker than those needed in previous papers. When the objective function of DE is replaced by the negative von Neumann entropy, we obtain a sharp convergence rate in Kullback-Leibler divergence.
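The Binomial observation model assumed here is easy to simulate: a non-identity Pauli observable has eigenvalues ±1, so the shot record for one fixed observable is a single Binomial draw, and the empirical mean estimates tr(rho W). A minimal sketch (the helper name is mine):

```python
import numpy as np

def pauli_binomial_sample(rho, W, shots, rng):
    """Simulate the Binomial observation model for a Pauli observable W
    with eigenvalues +1/-1: P(+1) = (1 + tr(rho W)) / 2, and the empirical
    mean of the +1/-1 outcomes estimates tr(rho W)."""
    p_plus = (1 + np.real(np.trace(rho @ W))) / 2
    k = rng.binomial(shots, p_plus)          # number of +1 outcomes
    return (2 * k - shots) / shots           # empirical estimate of tr(rho W)
```

Repeating this over a set of Pauli observables produces exactly the noisy linear measurements that the Dantzig, least squares, and projection estimators take as input.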


Estimation of low rank density matrices: bounds in Schatten norms and other distances

Xia, Dong, Koltchinskii, Vladimir

arXiv.org Machine Learning

Let ${\mathcal S}_m$ be the set of all $m\times m$ density matrices (positive semi-definite Hermitian matrices of unit trace). Consider a problem of estimation of an unknown density matrix $\rho\in {\mathcal S}_m$ based on outcomes of $n$ measurements of observables $X_1,\dots, X_n\in {\mathbb H}_m$ (${\mathbb H}_m$ being the space of $m\times m$ Hermitian matrices) for a quantum system identically prepared $n$ times in state $\rho.$ Outcomes $Y_1,\dots, Y_n$ of such measurements could be described by a trace regression model in which ${\mathbb E}_{\rho}(Y_j|X_j)={\rm tr}(\rho X_j), j=1,\dots, n.$ The design variables $X_1,\dots, X_n$ are often sampled at random from the uniform distribution in an orthonormal basis $\{E_1,\dots, E_{m^2}\}$ of ${\mathbb H}_m$ (such as the Pauli basis). The goal is to estimate the unknown density matrix $\rho$ based on the data $(X_1,Y_1), \dots, (X_n,Y_n).$ Let $$ \hat Z:=\frac{m^2}{n}\sum_{j=1}^n Y_j X_j $$ and let $\check \rho$ be the projection of $\hat Z$ onto the convex set ${\mathcal S}_m$ of density matrices. It is shown that for the estimator $\check \rho$ the minimax lower bounds in classes of low rank density matrices (established earlier) are attained up to logarithmic factors for all Schatten $p$-norm distances, $p\in [1,\infty],$ and for the Bures version of the quantum Hellinger distance. Moreover, for a slightly modified version of the estimator $\check \rho$ the same property holds also for the quantum relative entropy (Kullback-Leibler) distance between density matrices.
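The projection step in the estimator above, mapping $\hat Z$ to the nearest density matrix in Frobenius norm, reduces to projecting the eigenvalues of the Hermitian part onto the probability simplex. A sketch under that standard reduction (function names are mine):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    r = np.max(np.nonzero(u - (css - 1) / k > 0)[0]) + 1
    theta = (css[r - 1] - 1) / r
    return np.maximum(v - theta, 0.0)

def project_to_density_matrices(Zhat):
    """Frobenius-norm projection of a matrix onto the density-matrix set:
    take the Hermitian part, then project its eigenvalues onto the simplex."""
    A = (Zhat + Zhat.conj().T) / 2
    w, V = np.linalg.eigh(A)
    return V @ np.diag(project_to_simplex(w)) @ V.conj().T
```

The output is Hermitian, positive semi-definite, and has unit trace by construction, so it is a valid density matrix whatever noisy $\hat Z$ is fed in.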


Universal low-rank matrix recovery from Pauli measurements

Liu, Yi-Kai

Neural Information Processing Systems

We study the problem of reconstructing an unknown matrix M of rank r and dimension d using O(rd polylog d) Pauli measurements. This has applications in quantum state tomography, and is a non-commutative analogue of a well-known problem in compressed sensing: recovering a sparse vector from a few of its Fourier coefficients. We show that almost all sets of O(rd log^6 d) Pauli measurements satisfy the rank-r restricted isometry property (RIP). This implies that M can be recovered from a fixed ("universal") set of Pauli measurements, using nuclear-norm minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error. A similar result holds for any class of measurements that use an orthonormal operator basis whose elements have small operator norm. Our proof uses Dudley's inequality for Gaussian processes, together with bounds on covering numbers obtained via entropy duality.
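The nuclear-norm minimization this result licenses can be sketched with proximal gradient descent, whose prox step is singular-value thresholding. The toy below uses real symmetric sensing matrices to keep the linear algebra simple; all names and parameter values are illustrative, not from the paper:

```python
import numpy as np

def svt(A, tau):
    """Singular-value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def matrix_lasso(y, ops, tau, step, iters):
    """Proximal gradient for 0.5 * sum_j (tr(E_j M) - y_j)^2 + tau * ||M||_*,
    with real symmetric sensing matrices E_j (a toy matrix Lasso)."""
    M = np.zeros_like(ops[0], dtype=float)
    for _ in range(iters):
        grad = sum((np.trace(E @ M) - yj) * E for E, yj in zip(ops, y))
        M = svt(M - step * grad, step * tau)
    return M
```

With an orthonormal, complete set of sensing matrices the iteration contracts to the soft-thresholded target, illustrating why nuclear-norm penalties recover low-rank matrices with small bias; the theorem's content is that far fewer than $d^2$ random Pauli measurements already suffice via RIP.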

